Sensor fusion is the combining of sensory data, or data derived from disparate sources, such that the resulting information has less uncertainty than would be possible if these sources were used individually. The term ''uncertainty reduction'' in this case can mean more accurate, more complete, or more dependable, or refer to the result of an emerging view, such as stereoscopic vision (calculation of depth information by combining two-dimensional images from two cameras at slightly different viewpoints).〔Haghighat, M. B. A., Aghagolzadeh, A., & Seyedarabi, H. (2011). Multi-focus image fusion for visual sensor networks in DCT domain. Computers & Electrical Engineering, 37(5), 789–797.〕 The data sources for a fusion process are not required to originate from identical sensors. One can distinguish ''direct fusion'', ''indirect fusion'', and fusion of the outputs of the former two. Direct fusion is the fusion of sensor data from a set of heterogeneous or homogeneous sensors, soft sensors, and history values of sensor data, while indirect fusion uses information sources such as ''a priori'' knowledge about the environment and human input. Sensor fusion is also known as ''(multi-sensor) data fusion'' and is a subset of ''information fusion''. A minimal numerical sketch of this uncertainty reduction follows the sensor list below.

In the context of binocular vision, ''sensory fusion'' is the unification of visual excitations from corresponding retinal images into a single visual perception, i.e. a single visual image. Single vision is the hallmark of retinal correspondence; double vision is the hallmark of retinal disparity.

== Examples of sensors ==
* Radar
* Sonar and other acoustic sensors
* Infra-red / thermal imaging cameras
* TV cameras
* Sonobuoys
* Seismic sensors
* Magnetic sensors
* Electronic Support Measures (ESM)
* Phased array
* MEMS
* Accelerometers
* Global Positioning System (GPS)
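The following is a minimal sketch, not taken from any cited source, of how fusing two sensors reduces uncertainty: two noisy scalar measurements of the same quantity are combined by inverse-variance weighting, the static special case of a Kalman filter update. The function name <code>fuse</code> and the radar/sonar numbers in the usage example are illustrative assumptions.

<syntaxhighlight lang="python">
def fuse(z1, var1, z2, var2):
    """Inverse-variance weighted fusion of two scalar measurements.

    Each measurement z_i is assumed unbiased with variance var_i and
    independent noise; the fused variance is then smaller than either
    input variance, which is the uncertainty reduction described above.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2            # precision weights: the less noisy sensor counts more
    z_fused = (w1 * z1 + w2 * z2) / (w1 + w2)  # precision-weighted average of the two readings
    var_fused = 1.0 / (w1 + w2)                # always <= min(var1, var2)
    return z_fused, var_fused


# Hypothetical usage: a radar range of 10.2 m (variance 0.25) fused with a
# sonar range of 9.8 m (variance 0.04) yields roughly 9.86 m with variance
# about 0.034, i.e. less uncertain than either sensor alone.
if __name__ == "__main__":
    estimate, variance = fuse(10.2, 0.25, 9.8, 0.04)
    print(estimate, variance)
</syntaxhighlight>

The same inverse-variance rule generalizes to vector-valued states, which is the basis of the Kalman filter commonly used in sensor fusion.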